[AMDGPU] Optimize image sample followed by llvm.amdgcn.cvt.pkrtz into d16 variant #145203

Closed

Conversation

harrisonGPU
Contributor

This patch folds the pattern where the result of an image sample is passed to
llvm.amdgcn.cvt.pkrtz, and the packed result is accessed with
extractelement <2 x half>, index 0. In this case, we can emit the d16 variant
of the image intrinsic directly, improving code clarity, avoiding unnecessary
v_cvt_pk_rtz_f16_f32_e64 instructions, and reducing VGPR usage.

Folding is only applied if:

  • The image sample result has exactly one use: a call to amdgcn.cvt.pkrtz
  • The result of pkrtz has one use: an extractelement with index 0
  • We do not fold if index 1 is accessed, since it may contain poison

Example matched pattern:

%sample = call float @llvm.amdgcn.image.sample
%pack   = call <2 x half> @llvm.amdgcn.cvt.pkrtz(float %sample, float %any)
%h0     = extractelement <2 x half> %pack, i64 0

This gets folded into:

%sample = call half @llvm.amdgcn.image.sample

@harrisonGPU harrisonGPU self-assigned this Jun 22, 2025
@llvmbot llvmbot added backend:AMDGPU, llvm:instcombine, llvm:transforms labels Jun 22, 2025
@llvmbot
Member

llvmbot commented Jun 22, 2025

@llvm/pr-subscribers-llvm-transforms

@llvm/pr-subscribers-backend-amdgpu

Author: Harrison Hao (harrisonGPU)

Full diff: https://github.com/llvm/llvm-project/pull/145203.diff

2 Files Affected:

  • (modified) llvm/lib/Target/AMDGPU/AMDGPUInstCombineIntrinsic.cpp (+36)
  • (modified) llvm/test/Transforms/InstCombine/AMDGPU/image-d16.ll (+134)
diff --git a/llvm/lib/Target/AMDGPU/AMDGPUInstCombineIntrinsic.cpp b/llvm/lib/Target/AMDGPU/AMDGPUInstCombineIntrinsic.cpp
index 5477c5eae9392..8a944c58a0ea7 100644
--- a/llvm/lib/Target/AMDGPU/AMDGPUInstCombineIntrinsic.cpp
+++ b/llvm/lib/Target/AMDGPU/AMDGPUInstCombineIntrinsic.cpp
@@ -247,6 +247,42 @@ simplifyAMDGCNImageIntrinsic(const GCNSubtarget *ST,
                                        ArgTys[0] = User->getType();
                                      });
         }
+
+        // Fold image.sample + cvt.pkrtz -> extractelement idx0 into a single
+        // d16 image sample.
+        // Pattern to match:
+        //   %sample = call float @llvm.amdgcn.image.sample...
+        //   %pack = call <2 x half> @llvm.amdgcn.cvt.pkrtz(float %sample,
+        //   float %any)
+        //   %low = extractelement <2 x half> %pack, i64 0
+        // Replacement:
+        //   call half @llvm.amdgcn.image.sample
+        //
+        // Folding criteria:
+        //   1. The only user of the image.sample intrinsic is amdgcn.cvt.pkrtz.
+        //   2. That cvt.pkrtz call has exactly one use.
+        //   3. Its sole user is an extractelement instruction with index zero.
+        // Otherwise, folding is not performed, because D16 sampling only
+        // guarantees that the element at index 0 is defined; index 1 is
+        // undefined and using it will result in poison.
+        if (auto *CvtPkrtzCall = dyn_cast<CallInst>(User)) {
+          if (CvtPkrtzCall->getIntrinsicID() == Intrinsic::amdgcn_cvt_pkrtz &&
+              CvtPkrtzCall->hasOneUse()) {
+            // Unique use must be extractelement idx == 0
+            if (auto *Ext =
+                    dyn_cast<ExtractElementInst>(*CvtPkrtzCall->user_begin())) {
+              if (isa<ConstantInt>(Ext->getIndexOperand()) &&
+                  cast<ConstantInt>(Ext->getIndexOperand())->isZero()) {
+
+                return modifyIntrinsicCall(
+                    II, *CvtPkrtzCall, ImageDimIntr->Intr, IC,
+                    [&](auto &Args, auto &ArgTys) {
+                      ArgTys[0] = CvtPkrtzCall->getType();
+                    });
+              }
+            }
+          }
+        }
       }
 
       // Only perform D16 folding if every user of the image sample is
diff --git a/llvm/test/Transforms/InstCombine/AMDGPU/image-d16.ll b/llvm/test/Transforms/InstCombine/AMDGPU/image-d16.ll
index ee5ccf5af987d..f4f74a84bcb8b 100644
--- a/llvm/test/Transforms/InstCombine/AMDGPU/image-d16.ll
+++ b/llvm/test/Transforms/InstCombine/AMDGPU/image-d16.ll
@@ -239,6 +239,140 @@ main_body:
   ret bfloat %res
 }
 
+define amdgpu_ps float @image_sample_2d_single_pkrtz_to_d16(<8 x i32> %surf_desc, <4 x i32> %samp, float %u, float %v) {
+; GFX7-LABEL: @image_sample_2d_single_pkrtz_to_d16(
+; GFX7-NEXT:  main_body:
+; GFX7-NEXT:    [[SAMPLE:%.*]] = call float @llvm.amdgcn.image.sample.lz.2d.f32.f32.v8i32.v4i32(i32 1, float [[U:%.*]], float [[V:%.*]], <8 x i32> [[SURF_DESC:%.*]], <4 x i32> [[SAMP:%.*]], i1 false, i32 0, i32 0)
+; GFX7-NEXT:    [[PACK:%.*]] = call <2 x half> @llvm.amdgcn.cvt.pkrtz(float [[SAMPLE]], float 0.000000e+00)
+; GFX7-NEXT:    [[H0:%.*]] = extractelement <2 x half> [[PACK]], i64 0
+; GFX7-NEXT:    [[MUL:%.*]] = fmul reassoc arcp contract afn half [[H0]], [[H0]]
+; GFX7-NEXT:    [[DIV:%.*]] = fdiv reassoc arcp contract afn half [[MUL]], [[H0]]
+; GFX7-NEXT:    [[ADD:%.*]] = fadd reassoc arcp contract afn half [[DIV]], [[H0]]
+; GFX7-NEXT:    [[RES:%.*]] = fpext half [[ADD]] to float
+; GFX7-NEXT:    ret float [[RES]]
+;
+; GFX81PLUS-LABEL: @image_sample_2d_single_pkrtz_to_d16(
+; GFX81PLUS-NEXT:  main_body:
+; GFX81PLUS-NEXT:    [[SAMPLE:%.*]] = call half @llvm.amdgcn.image.sample.lz.2d.f16.f32.v8i32.v4i32(i32 1, float [[U:%.*]], float [[V:%.*]], <8 x i32> [[SURF_DESC:%.*]], <4 x i32> [[SAMP:%.*]], i1 false, i32 0, i32 0)
+; GFX81PLUS-NEXT:    [[MUL:%.*]] = fmul reassoc arcp contract afn half [[SAMPLE]], [[SAMPLE]]
+; GFX81PLUS-NEXT:    [[DIV:%.*]] = fdiv reassoc arcp contract afn half [[MUL]], [[SAMPLE]]
+; GFX81PLUS-NEXT:    [[ADD:%.*]] = fadd reassoc arcp contract afn half [[DIV]], [[SAMPLE]]
+; GFX81PLUS-NEXT:    [[RES:%.*]] = fpext half [[ADD]] to float
+; GFX81PLUS-NEXT:    ret float [[RES]]
+;
+main_body:
+  %sample = call float @llvm.amdgcn.image.sample.lz.2d.f32.f32.v8i32.v4i32(i32 1, float %u, float %v, <8 x i32> %surf_desc, <4 x i32> %samp, i1 false, i32 0, i32 0)
+  %pack = call <2 x half> @llvm.amdgcn.cvt.pkrtz(float %sample, float 0.000000e+00)
+  %h0 = extractelement <2 x half> %pack, i64 0
+  %mul = fmul reassoc arcp contract afn half %h0, %h0
+  %div = fdiv reassoc arcp contract afn half %mul, %h0
+  %add = fadd reassoc arcp contract afn half %div, %h0
+  %res = fpext half %add to float
+  ret float %res
+}
+
+define amdgpu_ps float @image_sample_2d_pkrtz_variable_no_d16(<8 x i32> %surf_desc, <4 x i32> %samp, float %u, float %v) {
+; GFX7-LABEL: @image_sample_2d_pkrtz_variable_no_d16(
+; GFX7-NEXT:  main_body:
+; GFX7-NEXT:    [[SAMPLE:%.*]] = call float @llvm.amdgcn.image.sample.lz.2d.f32.f32.v8i32.v4i32(i32 1, float [[U:%.*]], float [[V:%.*]], <8 x i32> [[SURF_DESC:%.*]], <4 x i32> [[SAMP:%.*]], i1 false, i32 0, i32 0)
+; GFX7-NEXT:    [[PACK:%.*]] = call <2 x half> @llvm.amdgcn.cvt.pkrtz(float [[SAMPLE]], float [[V]])
+; GFX7-NEXT:    [[H0:%.*]] = extractelement <2 x half> [[PACK]], i64 0
+; GFX7-NEXT:    [[H1:%.*]] = extractelement <2 x half> [[PACK]], i64 1
+; GFX7-NEXT:    [[MUL:%.*]] = fmul half [[H0]], [[H1]]
+; GFX7-NEXT:    [[ADD:%.*]] = fadd half [[MUL]], [[H0]]
+; GFX7-NEXT:    [[RES:%.*]] = fpext half [[ADD]] to float
+; GFX7-NEXT:    ret float [[RES]]
+;
+; GFX81PLUS-LABEL: @image_sample_2d_pkrtz_variable_no_d16(
+; GFX81PLUS-NEXT:  main_body:
+; GFX81PLUS-NEXT:    [[SAMPLE:%.*]] = call float @llvm.amdgcn.image.sample.lz.2d.f32.f32.v8i32.v4i32(i32 1, float [[U:%.*]], float [[V:%.*]], <8 x i32> [[SURF_DESC:%.*]], <4 x i32> [[SAMP:%.*]], i1 false, i32 0, i32 0)
+; GFX81PLUS-NEXT:    [[PACK:%.*]] = call <2 x half> @llvm.amdgcn.cvt.pkrtz(float [[SAMPLE]], float [[V]])
+; GFX81PLUS-NEXT:    [[H0:%.*]] = extractelement <2 x half> [[PACK]], i64 0
+; GFX81PLUS-NEXT:    [[H1:%.*]] = extractelement <2 x half> [[PACK]], i64 1
+; GFX81PLUS-NEXT:    [[MUL:%.*]] = fmul half [[H0]], [[H1]]
+; GFX81PLUS-NEXT:    [[ADD:%.*]] = fadd half [[MUL]], [[H0]]
+; GFX81PLUS-NEXT:    [[RES:%.*]] = fpext half [[ADD]] to float
+; GFX81PLUS-NEXT:    ret float [[RES]]
+;
+main_body:
+  %sample = call float @llvm.amdgcn.image.sample.lz.2d.f32.f32.v8i32.v4i32(i32 1, float %u, float %v, <8 x i32> %surf_desc, <4 x i32> %samp, i1 false, i32 0, i32 0)
+  %pack = call <2 x half> @llvm.amdgcn.cvt.pkrtz(float %sample, float %v)
+  %h0 = extractelement <2 x half> %pack, i64 0
+  %h1 = extractelement <2 x half> %pack, i64 1
+  %mul = fmul half %h0, %h1
+  %add = fadd half %mul, %h0
+  %res = fpext half %add to float
+  ret float %res
+}
+
+define amdgpu_ps float @image_sample_2d_pkrtz_constant_no_fold(<8 x i32> %surf_desc, <4 x i32> %samp, float %u, float %v) {
+; GFX7-LABEL: @image_sample_2d_pkrtz_constant_no_fold(
+; GFX7-NEXT:  main_body:
+; GFX7-NEXT:    [[SAMPLE:%.*]] = call float @llvm.amdgcn.image.sample.lz.2d.f32.f32.v8i32.v4i32(i32 1, float [[U:%.*]], float [[V:%.*]], <8 x i32> [[SURF_DESC:%.*]], <4 x i32> [[SAMP:%.*]], i1 false, i32 0, i32 0)
+; GFX7-NEXT:    [[PACK:%.*]] = call <2 x half> @llvm.amdgcn.cvt.pkrtz(float [[SAMPLE]], float 0.000000e+00)
+; GFX7-NEXT:    [[H0:%.*]] = extractelement <2 x half> [[PACK]], i64 0
+; GFX7-NEXT:    [[H1:%.*]] = extractelement <2 x half> [[PACK]], i64 1
+; GFX7-NEXT:    [[MUL:%.*]] = fmul half [[H0]], [[H1]]
+; GFX7-NEXT:    [[ADD:%.*]] = fadd half [[MUL]], [[H0]]
+; GFX7-NEXT:    [[RES:%.*]] = fpext half [[ADD]] to float
+; GFX7-NEXT:    ret float [[RES]]
+;
+; GFX81PLUS-LABEL: @image_sample_2d_pkrtz_constant_no_fold(
+; GFX81PLUS-NEXT:  main_body:
+; GFX81PLUS-NEXT:    [[SAMPLE:%.*]] = call float @llvm.amdgcn.image.sample.lz.2d.f32.f32.v8i32.v4i32(i32 1, float [[U:%.*]], float [[V:%.*]], <8 x i32> [[SURF_DESC:%.*]], <4 x i32> [[SAMP:%.*]], i1 false, i32 0, i32 0)
+; GFX81PLUS-NEXT:    [[PACK:%.*]] = call <2 x half> @llvm.amdgcn.cvt.pkrtz(float [[SAMPLE]], float 0.000000e+00)
+; GFX81PLUS-NEXT:    [[H0:%.*]] = extractelement <2 x half> [[PACK]], i64 0
+; GFX81PLUS-NEXT:    [[H1:%.*]] = extractelement <2 x half> [[PACK]], i64 1
+; GFX81PLUS-NEXT:    [[MUL:%.*]] = fmul half [[H0]], [[H1]]
+; GFX81PLUS-NEXT:    [[ADD:%.*]] = fadd half [[MUL]], [[H0]]
+; GFX81PLUS-NEXT:    [[RES:%.*]] = fpext half [[ADD]] to float
+; GFX81PLUS-NEXT:    ret float [[RES]]
+;
+main_body:
+  %sample = call float @llvm.amdgcn.image.sample.lz.2d.f32.f32.v8i32.v4i32(i32 1, float %u, float %v, <8 x i32> %surf_desc, <4 x i32> %samp, i1 false, i32 0, i32 0)
+  %pack = call <2 x half> @llvm.amdgcn.cvt.pkrtz(float %sample, float 0.000000e+00)
+  %h0 = extractelement <2 x half> %pack, i64 0
+  %h1 = extractelement <2 x half> %pack, i64 1
+  %mul = fmul half %h0, %h1
+  %add = fadd half %mul, %h0
+  %res = fpext half %add to float
+  ret float %res
+}
+
+define amdgpu_ps float @image_sample_2d_single_pkrtz_high_no_d16(<8 x i32> %surf_desc, <4 x i32> %samp, float %u, float %v) {
+; GFX7-LABEL: @image_sample_2d_single_pkrtz_high_no_d16(
+; GFX7-NEXT:  main_body:
+; GFX7-NEXT:    [[SAMPLE:%.*]] = call float @llvm.amdgcn.image.sample.lz.2d.f32.f32.v8i32.v4i32(i32 1, float [[U:%.*]], float [[V:%.*]], <8 x i32> [[SURF_DESC:%.*]], <4 x i32> [[SAMP:%.*]], i1 false, i32 0, i32 0)
+; GFX7-NEXT:    [[PACK:%.*]] = call <2 x half> @llvm.amdgcn.cvt.pkrtz(float 0.000000e+00, float [[SAMPLE]])
+; GFX7-NEXT:    [[H0:%.*]] = extractelement <2 x half> [[PACK]], i64 1
+; GFX7-NEXT:    [[MUL:%.*]] = fmul reassoc arcp contract afn half [[H0]], [[H0]]
+; GFX7-NEXT:    [[DIV:%.*]] = fdiv reassoc arcp contract afn half [[MUL]], [[H0]]
+; GFX7-NEXT:    [[ADD:%.*]] = fadd reassoc arcp contract afn half [[DIV]], [[H0]]
+; GFX7-NEXT:    [[RES:%.*]] = fpext half [[ADD]] to float
+; GFX7-NEXT:    ret float [[RES]]
+;
+; GFX81PLUS-LABEL: @image_sample_2d_single_pkrtz_high_no_d16(
+; GFX81PLUS-NEXT:  main_body:
+; GFX81PLUS-NEXT:    [[SAMPLE:%.*]] = call float @llvm.amdgcn.image.sample.lz.2d.f32.f32.v8i32.v4i32(i32 1, float [[U:%.*]], float [[V:%.*]], <8 x i32> [[SURF_DESC:%.*]], <4 x i32> [[SAMP:%.*]], i1 false, i32 0, i32 0)
+; GFX81PLUS-NEXT:    [[PACK:%.*]] = call <2 x half> @llvm.amdgcn.cvt.pkrtz(float 0.000000e+00, float [[SAMPLE]])
+; GFX81PLUS-NEXT:    [[H0:%.*]] = extractelement <2 x half> [[PACK]], i64 1
+; GFX81PLUS-NEXT:    [[MUL:%.*]] = fmul reassoc arcp contract afn half [[H0]], [[H0]]
+; GFX81PLUS-NEXT:    [[DIV:%.*]] = fdiv reassoc arcp contract afn half [[MUL]], [[H0]]
+; GFX81PLUS-NEXT:    [[ADD:%.*]] = fadd reassoc arcp contract afn half [[DIV]], [[H0]]
+; GFX81PLUS-NEXT:    [[RES:%.*]] = fpext half [[ADD]] to float
+; GFX81PLUS-NEXT:    ret float [[RES]]
+;
+main_body:
+  %sample = call float @llvm.amdgcn.image.sample.lz.2d.f32.f32.v8i32.v4i32(i32 1, float %u, float %v, <8 x i32> %surf_desc, <4 x i32> %samp, i1 false, i32 0, i32 0)
+  %pack = call <2 x half> @llvm.amdgcn.cvt.pkrtz(float 0.000000e+00, float %sample)
+  %h0 = extractelement <2 x half> %pack, i64 1
+  %mul = fmul reassoc arcp contract afn half %h0, %h0
+  %div = fdiv reassoc arcp contract afn half %mul, %h0
+  %add = fadd reassoc arcp contract afn half %div, %h0
+  %res = fpext half %add to float
+  ret float %res
+}
+
 define amdgpu_ps half @image_gather4_2d_v4f32(<8 x i32> inreg %rsrc, <4 x i32> inreg %samp, half %s, half %t) {
 ; GFX7-LABEL: @image_gather4_2d_v4f32(
 ; GFX7-NEXT:  main_body:

Comment on lines +251 to +252
// Fold image.sample + cvt.pkrtz -> extractelement idx0 into a single
// d16 image sample.

@DadSchoorse DadSchoorse Jun 22, 2025


We had a similar optimization in mesa and it broke tests, because D16 does not round the same way as a 32-bit load followed by v_cvt_pk_f16_f32 for fixed-point formats. D16 rounds directly to nearest even in fp16, while the unoptimized pattern first rounds to nearest even in fp32 and then rounds towards zero in fp16.
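
To make that divergence concrete, here is a minimal standalone C++ sketch (an editorial illustration, not code from this patch or from mesa), assuming a UNORM8 texel with value 191/255: it converts the fp32 sample result to fp16 once with RNE (what a D16 sample does) and once with RTZ (what the cvt.pkrtz path does). It handles normal fp16 values only; no NaN/Inf/denormal or out-of-range handling.

// Illustration of the rounding divergence: RNE vs RTZ f32->f16 conversion
// for a hypothetical UNORM8 texel. Normal fp16 values only.
#include <cmath>
#include <cstdint>
#include <cstdio>
#include <cstring>

// Convert f32 -> f16 with round-to-nearest-even (RNE, what a D16 sample does)
// or round-toward-zero (RTZ, what the cvt.pkrtz path does).
static uint16_t f32ToF16(float F, bool RTZ) {
  uint32_t Bits;
  std::memcpy(&Bits, &F, sizeof(Bits));
  uint32_t Sign = (Bits >> 16) & 0x8000;
  int Exp = int((Bits >> 23) & 0xff) - 127 + 15; // rebias; assumed in [1, 30]
  uint32_t Mant = Bits & 0x7fffff;
  uint16_t H = uint16_t(Sign | (Exp << 10) | (Mant >> 13)); // truncation = RTZ
  uint32_t Dropped = Mant & 0x1fff; // the 13 discarded mantissa bits
  if (!RTZ && (Dropped > 0x1000 || (Dropped == 0x1000 && (H & 1))))
    ++H; // RNE rounds up; a carry into the exponent field is still correct
  return H;
}

// Decode a normal f16 back to f32 for printing.
static float f16ToF32(uint16_t H) {
  float V =
      std::ldexp(1.0f + float(H & 0x3ff) / 1024.0f, int((H >> 10) & 0x1f) - 15);
  return (H & 0x8000) ? -V : V;
}

int main() {
  // A UNORM8 texel, 191/255, after the hardware's RNE conversion to fp32.
  float Texel = 191.0f / 255.0f;
  std::printf("D16 path   (RNE): %.10f\n", f16ToF32(f32ToF16(Texel, false)));
  std::printf("pkrtz path (RTZ): %.10f\n", f16ToF32(f32ToF16(Texel, true)));
  return 0;
}

The two paths land on adjacent fp16 values (0.7490234375 vs 0.74853515625), a one-ulp difference of exactly the kind that trips conformance tests for fixed-point formats.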

Contributor Author

Thanks! Could you share some test cases so I can verify this issue? So far, I haven't encountered the problem you mentioned. Since image sampling with D16 only uses the lower 16 bits, and I ensure that the first argument to cvt.pkrtz is the result of the image sample, I believe this folding is valid. I've tested a few cases and haven't observed any issues so far, but I'll continue testing more to be safe.
Additionally, we've used this optimization in other compilers (not LLVM) for years without encountering any problems.

DadSchoorse

The mesa change was in https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/28730, but sadly the CI pipeline data was already deleted, so finding the exact test case that failed back then isn't trivial. But it resulted in disabling D16 optimizations if a fp16 rounding mode is required.

Contributor Author

Thanks, I will test it again. :-)

Contributor

I agree that this optimization is not OK unless we have some extra information saying that it's OK to use a different rounding mode. (But then if it's OK to use the normal round-to-nearest-even mode then why would you generate pkrtz instructions in the first place?)

Contributor Author

So do you mean that if the input data is in a fixed-point format, it gets converted to fp32 using RTE, and then pkrtz applies RTZ on top of that?

If we fold this into a D16 image sample, only a single rounding step is applied, which could cause accuracy differences. Is that what you're saying?

But in the IR I can't see any data format info in the image instruction, and I haven’t found a case that clearly uses a fixed-point format.

Do you have an example? I'd like to verify it. By fixed-point formats, you mean things like UNORM, right?

Contributor Author

@DadSchoorse This PR will impact mesa, right?

Contributor

You do not know the format. It's potentially a runtime value in the resource descriptor. You do not have enough information to perform this transformation.

Contributor Author

Yes, I'm not sure how to obtain the format yet; I'm still thinking about the issue. Would it be possible to make this optimization optional?

Contributor

For image_load and image_sample instructions specifically, there is no conversion from int to float or float to int; the result is int or float based on the in-memory type. So do you think there are still cases that this patch would handle incorrectly, @DadSchoorse?

@harrisonGPU harrisonGPU requested a review from nhaehnle June 22, 2025 07:44
Comment on lines +253 to +259
// Pattern to match:
// %sample = call float @llvm.amdgcn.image.sample...
// %pack = call <2 x half> @llvm.amdgcn.cvt.pkrtz(float %sample,
// float %any)
// %low = extractelement <2 x half> %pack, i64 0
// Replacement:
// call half @llvm.amdgcn.image.sample
Contributor

@jayfoad jayfoad Jul 1, 2025


This seems OK, but do we need to handle more cases? E.g.

  • What if both inputs of cvt.pkrtz come from image.sample instructions?
  • What if image.sample returns <2 x float> or <4 x float> and all values are converted to f16?

Incidentally, it would be easier to implement the pattern matching if we provided a scalar half @llvm.amdgcn.cvt.rtz(float) intrinsic, instead of one intrinsic that does both the conversion and the packing.

Contributor Author

@harrisonGPU harrisonGPU Jul 4, 2025


Thanks, Jay. I’ve thought about the cases you mentioned.

What if both inputs of cvt.pkrtz come from image.sample instructions?

Did you mean something like this?

define amdgpu_ps float @image_sample_2d_single_pkrtz_two_sample_no_d16(<8 x i32> %surf_desc, <4 x i32> %samp, float %u, float %v) {
entry:
  %sample1 = call float @llvm.amdgcn.image.sample.lz.2d.f32.f32.v8i32.v4i32(i32 2, float %u, float %v, <8 x i32> %surf_desc, <4 x i32> %samp, i1 false, i32 0, i32 0)
  %sample2 = call float @llvm.amdgcn.image.sample.lz.2d.f32.f32.v8i32.v4i32(i32 2, float %u, float %v, <8 x i32> %surf_desc, <4 x i32> %samp, i1 false, i32 0, i32 0)
  %pack = call <2 x half> @llvm.amdgcn.cvt.pkrtz(float %sample1, float %sample2)
  %h0 = extractelement <2 x half> %pack, i64 0
  %h1 = extractelement <2 x half> %pack, i64 1
  %mul = fmul half %h0, %h1
  %div = fdiv half %mul, %h0
  %add = fadd half %div, %h1
  %res = fpext half %add to float
  ret float %res
}

But in fact LLPC only uses the low half; you can find the details in LLPC:

m_builder->CreateFpTruncWithRounding(inst->getOperand(0),.....

so the second operand is usually a constant 0.0.

What if image.sample returns <2 x float> or <4 x float> and all values are converted to f16?

It’s a similar case. Only the first result of each pkrtz is used. For example:

define amdgpu_ps float @image_sample_2d_single_pkrtz_d16(<8 x i32> %surf_desc, <4 x i32> %samp, i32 %u, i32 %v) {
entry:
  %0 = call reassoc arcp contract afn <4 x float> @llvm.amdgcn.image.load.2d.v4f32.i32.v8i32(i32 15, i32 %u, i32 %v, <8 x i32> %surf_desc, i32 0, i32 0)
  %1 = extractelement <4 x float> %0, i64 3
  %2 = extractelement <4 x float> %0, i64 2
  %3 = extractelement <4 x float> %0, i64 1
  %4 = extractelement <4 x float> %0, i64 0
  %5 = call <2 x half> @llvm.amdgcn.cvt.pkrtz(float %4, float 0.000000e+00)
  %6 = extractelement <2 x half> %5, i64 0
  %7 = call <2 x half> @llvm.amdgcn.cvt.pkrtz(float %3, float 0.000000e+00)
  %8 = extractelement <2 x half> %7, i64 0
  %9 = call <2 x half> @llvm.amdgcn.cvt.pkrtz(float %2, float 0.000000e+00)
  %10 = extractelement <2 x half> %9, i64 0
  %11 = call <2 x half> @llvm.amdgcn.cvt.pkrtz(float %1, float 0.000000e+00)
  %12 = extractelement <2 x half> %11, i64 0
  %mul1 = fmul reassoc arcp contract afn half %6, %8
  %mul2 = fmul reassoc arcp contract afn half %10, %12
  %add = fadd reassoc arcp contract afn half %mul1, %mul2
  %res = fpext half %add to float
  ret float %res
}

I plan to support this case in a follow-up patch after some refactoring: #145312 (comment).

Do we have a @llvm.amdgcn.cvt.rtz(float) intrinsic? @llvm.amdgcn.cvt.pkrtz requires returning <2 x half>, so it’s not usable for scalar half values directly, right?

def int_amdgcn_cvt_pkrtz : ClangBuiltin<"__builtin_amdgcn_cvt_pkrtz">,
  DefaultAttrsIntrinsic<[llvm_v2f16_ty], [llvm_float_ty, llvm_float_ty],
            [IntrNoMem, IntrSpeculatable]
>;

Contributor

@arsenm arsenm Jul 7, 2025


@llvm llvm deleted a comment from GinShio Jul 7, 2025
@harrisonGPU harrisonGPU closed this Aug 9, 2025
@harrisonGPU harrisonGPU deleted the amdgpu/image-pkrtz-combine branch August 9, 2025 10:10